Minimax Risk Bounds in Extreme Value Theory

Author

  • Holger Drees
Abstract

Asymptotic minimax estimators of a positive extreme value index under zero-one loss are investigated in the classical i.i.d. setup. To this end, we prove the weak convergence of suitable local experiments, with Pareto distributions as the center of localization, to a white noise model that was previously studied in the context of nonparametric local density estimation and regression. From this result we derive upper and lower bounds on the asymptotic minimax risk in the local model and in certain global models as well. Finally, the implications for fixed-length confidence intervals are discussed. In particular, asymptotic confidence intervals with almost minimal length are constructed, while the popular Hill estimator is shown to yield slightly longer confidence intervals.
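The Hill estimator mentioned in the abstract estimates a positive extreme value (tail) index from the k largest order statistics. As a minimal illustration (the function name and the choice of k below are ours, not taken from the paper):

```python
import math

def hill_estimator(data, k):
    """Hill estimator of a positive extreme value index gamma.

    Based on the k largest order statistics of the sample:
        gamma_hat = (1/k) * sum_{i=1}^{k} log(X_{(n-i+1)} / X_{(n-k)})
    """
    x = sorted(data, reverse=True)  # order statistics, largest first
    if not 0 < k < len(x):
        raise ValueError("k must satisfy 0 < k < n")
    threshold = x[k]  # the (k+1)-th largest observation, X_{(n-k)}
    return sum(math.log(x[i] / threshold) for i in range(k)) / k
```

For exact Pareto data with tail index gamma (generated as U**(-gamma) for uniform U), the estimator is consistent as k grows with the sample size; choosing k in practice involves the usual bias-variance trade-off.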


Similar articles

On Integral Operator and Argument Estimation of a Novel Subclass of Harmonic Univalent Functions

Abstract. In this paper we define and verify a subclass of harmonic univalent functions involving the argument of complex-valued functions of the form f = h + ḡ, and we investigate some properties of this subclass, e.g. necessary and sufficient coefficient bounds, extreme points, distortion bounds and the Hadamard product...


Empirical Entropy, Minimax Regret and Minimax Risk

We consider random design regression with square loss. We propose a method that aggregates empirical risk minimizers (ERM) over appropriately chosen random subsets and reduces to ERM in the extreme case, and we establish exact oracle inequalities for its risk. We show that, under an ε^(−p) growth condition on the empirical ε-entropy, the excess risk of the proposed method attains the rate n^(−2/(2+p)) for p ∈ (0,...


Hardest One-Dimensional Subproblems

For a long time, lower bounds on the difficulty of estimation have been constructed by showing that estimation was difficult even in certain one-dimensional subproblems. The logical extension of this is to identify hardest one-dimensional subproblems and to ask whether these are, either exactly or approximately, as difficult as the full problem. We do this in three settings: estimating linear fun...


Bounds on the Bayes and minimax risk for signal parameter estimation

In estimating the parameter θ from a parametrized signal problem (with 0 ≤ θ ≤ L) observed through Gaussian white noise, four useful and computable lower bounds for the Bayes risk were developed. For problems with different L and different signal-to-noise ratios, some bounds are superior to the others. The lower bound obtained by taking the maximum of the four serves not only as a good lo...


On the worst and least possible asymptotic dependence

Various methods have been discussed in the literature to evaluate the range of values for a risk measure of a function of random variables. The uncertainty about the chosen dependence risk model makes these bounds worth investigating, and the tighter the entire spectrum of values is, the more informative the bounds are. It is natural to believe that the individual risk distributions are known; an...



Publication year: 2001